3 research outputs found

    Vision-based deep execution monitoring

    Execution monitoring of high-level robot actions can be effectively improved by visually monitoring the state of the world in terms of the preconditions and postconditions that hold before and after the execution of an action. Furthermore, a policy for choosing where to look, either to verify the relations that specify the pre- and postconditions or to refocus after a failure, can greatly improve robot execution in an uncharted environment. Thanks to the remarkable results of deep learning, it is now possible to rely strongly on visual perception and assume that the environment is observable. In this work we present visual execution monitoring for a robot executing tasks in an uncharted lab environment. The execution monitor interacts with the environment via a visual stream that uses two DCNNs to recognize the objects the robot has to deal with and manipulate, and non-parametric Bayesian estimation to discover relations from the DCNN features. To recover from loss of focus and from failures due to missed objects, we resort to visual search policies learned via deep reinforcement learning.
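
    The monitoring loop described in this abstract can be pictured as below. This is a minimal sketch, not the paper's implementation: `Action`, `monitor_step`, and `visual_search` are hypothetical names, and the DCNN-based relation detectors are abstracted into boolean precondition/postcondition checks.

    ```python
    # Minimal sketch of precondition/postcondition execution monitoring.
    # All interfaces here are illustrative assumptions, not the paper's API.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Action:
        name: str
        preconditions: List[Callable[[], bool]]   # relations that must hold before execution
        postconditions: List[Callable[[], bool]]  # relations expected to hold afterwards
        execute: Callable[[], None]

    def monitor_step(action: Action, visual_search: Callable[[str], None]) -> bool:
        """Verify preconditions, execute the action, then verify postconditions.

        `visual_search` stands in for the learned policy that redirects the
        camera when a required relation cannot be verified in the current view.
        """
        if not all(check() for check in action.preconditions):
            visual_search(action.name)   # refocus, then let the caller retry
            return False
        action.execute()
        if not all(check() for check in action.postconditions):
            visual_search(action.name)   # postcondition failed: trigger recovery
            return False
        return True
    ```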

    Visual search and recognition for robot task execution and monitoring

    Visual search for relevant targets in the environment is a crucial robot skill. We propose a preliminary framework for the execution monitoring of a robot task, addressing the robot's ability to visually search the environment for the targets involved in the task. Visual search is also relevant for recovering from failures. The framework exploits deep reinforcement learning to acquire a "common sense" scene structure, and it takes advantage of a deep convolutional network to detect objects and the relevant relations holding between them. The framework builds on these methods to introduce vision-based execution monitoring, which uses classical planning as a backbone for task execution. Experiments show that with the proposed vision-based execution monitor the robot can complete simple tasks and recover from failures autonomously.
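
    A minimal sketch of how such a monitor could wrap a classical plan, under stated assumptions: `run_plan`, `detect_relations`, and `visual_search` are illustrative names, not the paper's interfaces. The idea shown is that the planner's expected relations are checked against the detector's output after each action, and the learned search policy is invoked for whatever is missing.

    ```python
    # Sketch of a plan-execution loop with vision-based monitoring and recovery.
    def run_plan(plan, detect_relations, visual_search, max_recoveries=3):
        """Execute a symbolic plan step by step under visual monitoring.

        `plan` is a list of (action, expected_relations) pairs from a classical
        planner; `detect_relations` returns the set of relations currently seen
        by the object/relation detector; `visual_search` is the learned policy
        that moves the camera toward a likely location of the missing targets.
        """
        for action, expected in plan:
            action()
            for _ in range(max_recoveries + 1):
                observed = detect_relations()
                if expected <= observed:            # every expected relation verified
                    break
                visual_search(expected - observed)  # look for the missing relations
            else:
                return False                        # recovery budget exhausted
        return True
    ```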

    Human motion primitive discovery and recognition

    We present a novel framework for the automatic discovery and recognition of human motion primitives from motion capture data. Human motion primitives are discovered by optimizing the 'motion flux', a quantity that depends on the motion of a group of skeletal joints. Models of each primitive category are computed via non-parametric Bayesian methods, and recognition is performed based on their geometric properties. A normalization of the primitives is proposed to make them invariant to anatomical variations and data sampling rate. Using our framework we build a publicly available dataset of human motion primitives based on motion capture sequences taken from well-known datasets. We expect that our framework, by providing an objective way of discovering and categorizing human motion, will be a useful tool in numerous research fields related to robotics, including human-inspired motion generation, learning by demonstration, and intuitive human-robot interaction.
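
    The abstract does not give the actual definition of motion flux, so the sketch below substitutes a crude per-frame motion score (summed joint speeds) and a simple threshold segmentation, purely to illustrate the segmentation-by-motion idea; `motion_score` and `segment_primitives` are hypothetical names and do not reflect the paper's optimization.

    ```python
    # Sketch of segmenting motion-capture data by a motion-energy score.
    # NOTE: the per-window speed sum below is a stand-in for the paper's
    # 'motion flux', whose definition is not given in this abstract.
    import numpy as np

    def motion_score(joints: np.ndarray, fps: float) -> np.ndarray:
        """joints: (T, J, 3) positions of a group of J skeletal joints.

        Returns a per-frame score: the summed joint speeds, a crude proxy
        for how much the joint group is moving at each instant.
        """
        vel = np.diff(joints, axis=0) * fps          # (T-1, J, 3) finite-difference velocities
        return np.linalg.norm(vel, axis=2).sum(axis=1)

    def segment_primitives(score: np.ndarray, thresh: float):
        """Split frames into candidate primitives wherever the score stays
        above `thresh` (thresholding, not the paper's optimization)."""
        active = score > thresh
        edges = np.flatnonzero(np.diff(active.astype(int)))
        bounds = np.r_[0, edges + 1, len(score)]
        return [(a, b) for a, b in zip(bounds[:-1], bounds[1:]) if active[a]]
    ```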